Convex Optimization Based Bit Allocation for Light Field Compression under Weighting and Consistency Constraints
Compared with conventional images and video, light field images introduce a
weight channel, as well as the visual consistency of rendered views; both must
be taken into account when compressing the pseudo-temporal sequence (PTS)
created from light field images. In this paper, we propose a novel frame-level
bit allocation framework for PTS coding. A joint model that measures
weighted distortion and visual consistency, combined with an iterative encoding
system, yields the optimal bit allocation for each frame by solving a convex
optimization problem. Experimental results show that the proposed framework is
effective in producing desired distortion distribution based on weights, and
achieves up to 24.7% BD-rate reduction compared to the default rate control
algorithm.
Comment: published in IEEE Data Compression Conference, 201
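The abstract does not give the paper's rate-distortion model or solver. As an illustrative sketch only, assume a simple hyperbolic model D_i = w_i * c_i / r_i for frame i (w_i a weight, c_i a complexity estimate); minimizing the weighted total distortion subject to a total-rate budget then has a closed-form Lagrangian solution, which hints at why the frame-level allocation reduces to a convex problem:

```python
import math

def allocate_bits(weights, complexities, total_bits):
    """Closed-form frame-level bit allocation (illustrative, not the paper's model).

    Assumes D_i = w_i * c_i / r_i per frame. Minimizing sum_i D_i subject to
    sum_i r_i = total_bits via a Lagrange multiplier gives
    r_i proportional to sqrt(w_i * c_i).
    """
    scores = [math.sqrt(w * c) for w, c in zip(weights, complexities)]
    total_score = sum(scores)
    # Each frame gets a share of the budget proportional to its score.
    return [total_bits * s / total_score for s in scores]
```

A frame with four times the weight (equal complexity) receives twice the bits under this model, matching the intuition that weights shape the distortion distribution.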
Judging a video by its bitstream cover
Classifying videos into distinct categories, such as Sport and Music Video,
is crucial for multimedia understanding and retrieval, especially in an age
where an immense volume of video content is constantly being generated.
Traditional methods require video decompression to extract pixel-level features
like color, texture, and motion, thereby increasing computational and storage
demands. Moreover, these methods often suffer from performance degradation in
low-quality videos. We present a novel approach that examines only the
post-compression bitstream of a video to perform classification, eliminating
the need for decompression. We validate our approach using a custom-built data set
comprising over 29,000 YouTube video clips, totaling 6,000 hours and spanning
11 distinct categories. Our preliminary evaluations indicate precision,
accuracy, and recall rates well over 80%. The algorithm operates approximately
15,000 times faster than real-time for 30 fps videos, outperforming the
traditional Dynamic Time Warping (DTW) algorithm by six orders of magnitude.
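The paper's features and classifier are not specified in the abstract. The sketch below only illustrates the general idea of classifying from compressed packet sizes without any pixel decoding; the feature set and the nearest-centroid classifier are assumptions for illustration, not the authors' method:

```python
import statistics

def bitstream_features(packet_sizes):
    """Lightweight features computed purely from compressed packet sizes.

    No decoding is performed; only the size of each coded frame is used.
    The chosen features (mean, spread, keyframe proxy) are illustrative.
    """
    mean = statistics.fmean(packet_sizes)
    spread = statistics.pstdev(packet_sizes)
    # Crude keyframe proxy: packets much larger than the average frame.
    key_ratio = sum(1 for s in packet_sizes if s > 2 * mean) / len(packet_sizes)
    return (mean, spread, key_ratio)

def nearest_centroid(feature, centroids):
    """Assign the label whose centroid is closest in squared Euclidean distance."""
    return min(
        centroids,
        key=lambda label: sum((a - b) ** 2 for a, b in zip(feature, centroids[label])),
    )
```

Because no inverse transform or motion compensation runs, this style of feature extraction is bounded by I/O rather than decoding speed, which is consistent with the large speedups the abstract reports.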
Quantum multipartite maskers vs quantum error-correcting codes
Since masking of quantum information was introduced by Modi et al. in [PRL
120, 230501 (2018)], many discussions on this topic have been published. In
this paper, we consider the relationship between quantum multipartite maskers
(QMMs) and quantum error-correcting codes (QECCs). We say that a subset Q of
pure states of a system can be masked by an operator S into a
multipartite system \H^{(n)} if all of the image states of states
in Q have the same marginal states on each subsystem. We call such
an S a QMM of Q. By establishing an expression for a QMM, we obtain a
relationship between QMMs and QECCs, which reads that an isometry is a QMM of
all pure states of a system if and only if its range is a QECC for any
one-erasure channel. As an application, we prove that there is no isometric
universal masker from \C^2 into \C^2\otimes\C^2\otimes\C^2 and then the
states of \C^3 cannot be masked isometrically into
\C^2\otimes\C^2\otimes\C^2. This completes a main result and
leads to a negative answer to an open question in [PRA 98, 062306 (2018)].
Another application is that arbitrary quantum states of \C^d can be
completely hidden in correlations between any two subsystems of the tripartite
system \C^{d+1}\otimes\C^{d+1}\otimes\C^{d+1}, while arbitrary quantum states
cannot be completely hidden in the correlations between subsystems of a
bipartite system [PRL 98, 080502 (2007)].
Comment: This is a revision of arXiv:2004.14540. In the present version, old
Eq. (2.2) has been exchanged with another equation, and the following three
equations have been corrected.
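The masking condition described in the abstract can be written out explicitly. In the sketch below, S and Q are generic symbols for the masking operator and the masked set (the paper's own notation may differ):

```latex
% S : \H \to \H^{(n)} is a QMM of Q \subseteq \H if every subsystem's
% reduced state is independent of which state in Q was masked:
\mathrm{tr}_{\hat{k}}\!\left(S|\psi\rangle\langle\psi|S^{\dagger}\right)
  = \mathrm{tr}_{\hat{k}}\!\left(S|\phi\rangle\langle\phi|S^{\dagger}\right)
  \quad \forall\, |\psi\rangle, |\phi\rangle \in Q,\ \ k = 1,\dots,n,
```

where \mathrm{tr}_{\hat{k}} denotes the partial trace over all subsystems except the k-th. All information about the masked state then resides in the correlations between subsystems, not in any single marginal.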
A Bayesian Approach to Block Structure Inference in AV1-based Multi-rate Video Encoding
Due to differences in frame structure, existing multi-rate video encoding
algorithms cannot be directly adapted to encoders utilizing special reference
frames such as AV1 without introducing substantial rate-distortion loss. To
tackle this problem, we propose a novel Bayesian block structure inference
model inspired by a modification to an HEVC-based algorithm. It estimates the
posterior probability distributions of block partitioning, and adapts early
terminations in the RDO procedure accordingly. Experimental results show that
the proposed method provides flexibility for controlling the tradeoff between
speed and coding efficiency, and can achieve an average time saving of 36.1%
(up to 50.6%) with negligible bitrate cost.
Comment: published in IEEE Data Compression Conference, 201
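The abstract's core mechanism, a posterior over split decisions that gates early termination in RDO, can be sketched for a single binary split/no-split decision. All names and the likelihood inputs below are illustrative assumptions, not the AV1/libaom API or the paper's exact model:

```python
def posterior_split(prior_split, lik_split, lik_merge):
    """Bayes update for a binary block-partition decision.

    prior_split: prior probability that the block splits, e.g. carried over
    from the partition chosen at another rate of the same content.
    lik_split / lik_merge: likelihoods of the observed local statistics
    under the split and no-split hypotheses (illustrative inputs).
    """
    num = prior_split * lik_split
    den = num + (1.0 - prior_split) * lik_merge
    return num / den

def early_terminate(prior_split, lik_split, lik_merge, threshold=0.05):
    """Skip the split branch of RDO when its posterior mass is negligible.

    The threshold trades speed against coding efficiency: a larger value
    terminates more aggressively, mirroring the speed/BD-rate tradeoff
    the abstract describes.
    """
    return posterior_split(prior_split, lik_split, lik_merge) < threshold
```

Raising `threshold` prunes more RDO branches (faster encodes, higher bitrate cost), which is one way a single parameter can expose the flexibility the abstract claims.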